feat: add chunk application stats #12797
base: master
Conversation
*block_hash,
shard_uid.shard_id(),
apply_result.stats,
);
Saving chunk stats here means that only chunks applied inside blocks will have their stats saved. Stateless chunk validators will not save any stats. In the future we could change it to save it somewhere else, but it's good enough for the first version.
@@ -462,7 +467,8 @@ impl DBCol {
| DBCol::StateHeaders
| DBCol::TransactionResultForBlock
| DBCol::Transactions
| DBCol::StateShardUIdMapping => true,
| DBCol::StateShardUIdMapping
| DBCol::ChunkApplyStats => true,
I hope that marking this column as cold is enough to avoid garbage collection on archival nodes? I think these stats should be kept forever on archival nodes. They are not that big and it would be nice to be able to view stats for chunks older than three epochs.
/// The stats can be read to analyze what happened during chunk application.
/// - *Rows*: BlockShardId (BlockHash || ShardId) - 40 bytes
/// - *Column type*: `ChunkApplyStats`
ChunkApplyStats,
At first I thought that I could use ChunkHash as a key in the database, but that doesn't really work. The same chunk can be applied multiple times when there are missing chunks, and I think chunks created using the same prev_block would have the same hash (?).
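The BlockShardId row key described in the column docs (BlockHash || ShardId, 40 bytes) avoids that ambiguity, since the block hash is unique per application. A hypothetical stand-alone sketch of how such a key could be built (illustrative function name, not nearcore's actual helper):

```rust
// Hypothetical sketch: a 40-byte row key made of a 32-byte block hash
// followed by the shard id encoded as 8 big-endian bytes.
fn block_shard_key(block_hash: &[u8; 32], shard_id: u64) -> [u8; 40] {
    let mut key = [0u8; 40];
    key[..32].copy_from_slice(block_hash);
    key[32..].copy_from_slice(&shard_id.to_be_bytes());
    key
}

fn main() {
    let hash = [0xab; 32];
    let key = block_shard_key(&hash, 3);
    assert_eq!(key.len(), 40);
    assert_eq!(&key[..32], &hash[..]);
    assert_eq!(key[32..], 3u64.to_be_bytes());
}
```

Big-endian encoding keeps keys for the same block sorted by shard id in the column.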
@@ -648,6 +648,7 @@ impl<'a> ChainStoreUpdate<'a> {
self.gc_outgoing_receipts(&block_hash, shard_id);
self.gc_col(DBCol::IncomingReceipts, &block_shard_id);
self.gc_col(DBCol::StateTransitionData, &block_shard_id);
self.gc_col(DBCol::ChunkApplyStats, &block_shard_id);
I wonder if we could use some other garbage collection logic to keep the stats for longer than three epochs. Maybe something similar to LatestWitnesses, where the last N witnesses are kept in the database? It's annoying that useful data like these stats disappears after three epochs, especially in tests which have to run for a few epochs. Can be changed later.
Agreed that it would be cool to keep those longer and agreed to keep the first version simple.
@@ -336,7 +327,7 @@ impl Runtime {
apply_state: &ApplyState,
signed_transaction: &SignedTransaction,
transaction_cost: &TransactionCost,
stats: &mut ApplyStats,
stats: &mut ChunkApplyStatsV0,
) -> Result<(Receipt, ExecutionOutcomeWithId), InvalidTxError> {
let span = tracing::Span::current();
metrics::TRANSACTION_PROCESSED_TOTAL.inc();
Runtime metrics could probably be refactored so that first we collect the stats and at the very end we record all of the stats in the metrics. That would reduce clutter in the runtime code.
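The suggested refactor, accumulating counters on the stats struct during processing and emitting all metrics once at the end, could look roughly like this minimal sketch (illustrative names and a stand-in metrics sink, not nearcore's actual prometheus-based metrics API):

```rust
#[derive(Default)]
struct ChunkApplyStatsV0 {
    transactions_num: u64,
    receipts_num: u64,
}

// Stand-in for a metrics sink; real code would bump prometheus counters here.
fn record_metrics(stats: &ChunkApplyStatsV0, sink: &mut Vec<(String, u64)>) {
    sink.push(("transactions_processed_total".into(), stats.transactions_num));
    sink.push(("receipts_processed_total".into(), stats.receipts_num));
}

fn main() {
    let mut stats = ChunkApplyStatsV0::default();
    // During chunk application, only the struct's counters are bumped...
    stats.transactions_num += 2;
    stats.receipts_num += 5;
    // ...and everything is recorded into the metrics once, at the very end.
    let mut recorded = Vec::new();
    record_metrics(&stats, &mut recorded);
    assert_eq!(recorded[0].1, 2);
    assert_eq!(recorded[1].1, 5);
}
```

This keeps metric-recording calls out of the hot path of the runtime logic.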
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #12797 +/- ##
==========================================
- Coverage 70.40% 70.39% -0.02%
==========================================
Files 851 852 +1
Lines 174188 174429 +241
==========================================
+ Hits 122634 122785 +151
- Misses 46311 46398 +87
- Partials 5243 5246 +3
@@ -0,0 +1,218 @@
use std::collections::BTreeMap;
Does this need to be a part of primitives? Isn't there an obvious conceptual "producer" crate which all dependents use that could hold this type?
I initially put it in node-runtime, but then I needed the struct in near-store, and that doesn't depend on node-runtime, so I moved the struct to primitives. It's a primitive struct that is used in multiple crates, so that seemed like a good fit.
In the future there might be more crates that make use of these stats, maybe a custom aggregator which downloads stats from multiple nodes and aggregates them somehow. It would be nice to have a small crate that the aggregator can import without importing all of runtime.
If there's a better place for it, please let me know.
/// Useful for debugging, metrics and sanity checks.
#[derive(Debug, Clone, BorshSerialize, BorshDeserialize)]
pub enum ChunkApplyStats {
V0(ChunkApplyStatsV0),
Would it be possible for us to find a way to avoid versioning headaches with this mostly internal data? I don't think it is going to be painful if we make the old data inaccessible if the schema changes, we should take advantage of that.
These stats might be consumed by other services in the future - debug ui, custom stats aggregators, etc, so I wanted to have a (mostly) stable interface that they could depend on. My first thought was to make it versioned, but maybe there's other ways to go about it.
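The versioned-enum pattern being discussed can give consumers a stable interface while leaving room to evolve the schema. A hypothetical sketch (field names illustrative, borsh derives omitted for brevity):

```rust
#[derive(Debug, Clone, Default)]
pub struct ChunkApplyStatsV0 {
    pub transactions_num: u64,
    pub incoming_receipts_num: u64,
}

// Consumers match on the version, or go through accessor methods,
// so adding a V1 variant later doesn't break their code.
#[derive(Debug, Clone)]
pub enum ChunkApplyStats {
    V0(ChunkApplyStatsV0),
}

impl ChunkApplyStats {
    pub fn transactions_num(&self) -> u64 {
        match self {
            ChunkApplyStats::V0(v0) => v0.transactions_num,
        }
    }
}

fn main() {
    let stats = ChunkApplyStats::V0(ChunkApplyStatsV0 {
        transactions_num: 7,
        ..Default::default()
    });
    assert_eq!(stats.transactions_num(), 7);
}
```

With borsh, the enum discriminant is serialized first, so old V0 rows stay readable after a V1 is added.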
It looks like the problem of disk filling up was caused by an unrelated issue (faulty rocksdb update) combined with insufficient memory on the node. Constant crashes and rocksdb acting up caused too much data to be written to disk. The PR is ready for review.
LGTM
@@ -1084,6 +1098,7 @@ pub struct ChainStoreUpdate<'a> {
add_state_sync_infos: Vec<StateSyncInfo>,
remove_state_sync_infos: Vec<CryptoHash>,
challenged_blocks: HashSet<CryptoHash>,
chunk_apply_stats: HashMap<(CryptoHash, ShardId), ChunkApplyStats>,
Small suggestion: shard uid is the better unique identifier of a shard. That being said, it's often not readily available; in that case don't worry about it.
AFAIU from now on the plan is to add new shard ids instead of increasing UId versions, so it should be unique enough. ShardId is more user friendly so I went with that.
@@ -115,6 +116,7 @@ pub struct ApplyChunkResult {
pub bandwidth_scheduler_state_hash: CryptoHash,
/// Contracts accessed and deployed while applying the chunk.
pub contract_updates: ContractUpdates,
pub stats: ChunkApplyStatsV0,
Why the versioned struct instead of the enum?
nit: please add a comment
/// Was the chunk applied as a missing chunk (apply_old_chunk)
pub is_chunk_missing: bool,
Perhaps use the same schema as the chunks do - have height_included as a field and a method to check if a chunk is new or old. Not a biggie.
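The height_included shape being suggested here, a stored inclusion height plus a predicate, could look like this minimal sketch (field and method names are illustrative, not the actual chunk header API):

```rust
struct ChunkApplyStatsV0 {
    // Height at which the chunk was included, mirroring chunk headers.
    height_included: u64,
}

impl ChunkApplyStatsV0 {
    // A chunk is "new" when it was included at the height currently being
    // applied; otherwise it is an old chunk re-applied due to missing chunks.
    fn is_new_chunk(&self, block_height: u64) -> bool {
        self.height_included == block_height
    }
}

fn main() {
    let stats = ChunkApplyStatsV0 { height_included: 100 };
    assert!(stats.is_new_chunk(100));
    assert!(!stats.is_new_chunk(101)); // missing-chunk re-application
}
```

This stores strictly more information than a boolean while still answering the new-vs-old question.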
/// Number of previous bandwidth requests (prev_bandwidth_requests.len()).
pub prev_bandwidth_requests_num: u64,
Given this can be derived, can you remove it and add a method?
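Replacing the stored count with a method derived from the requests themselves might look like this (hypothetical field and type names for illustration):

```rust
struct BandwidthStats {
    // The requests themselves are stored; the count is derived on demand
    // instead of being duplicated as a separate u64 field that could drift.
    prev_bandwidth_requests: Vec<u64>,
}

impl BandwidthStats {
    fn prev_bandwidth_requests_num(&self) -> u64 {
        self.prev_bandwidth_requests.len() as u64
    }
}

fn main() {
    let stats = BandwidthStats {
        prev_bandwidth_requests: vec![10, 20, 30],
    };
    assert_eq!(stats.prev_bandwidth_requests_num(), 3);
}
```

Deriving the value removes the risk of the stored count going stale relative to the list.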
inner.record_outgoing_buffer_stats();
*stats = inner.stats;
nit: Maybe just return the stats from inner and set directly?
@@ -510,6 +533,7 @@ impl ReceiptSinkV2 {
trie: &dyn TrieAccess,
shard_layout: &ShardLayout,
side_effects: bool,
stats: &mut ChunkApplyStatsV0,
nit: Maybe the ChunkApplyStats (without version)?
for shard in self.outgoing_buffers.shards() {
let buffer = self.outgoing_buffers.to_shard(shard);
sanity check - Does this add to state witness size at all?
}

match self.outgoing_metadatas.get_metadata_for_shard(&shard) {
Some(metadata) if metadata.total_receipts_num() == buffer.len() => {
What does this if do and why do you need it? Please add a comment.
processing_state.stats.transactions_num =
transactions.transactions.len().try_into().unwrap();
processing_state.stats.incoming_receipts_num = incoming_receipts.len().try_into().unwrap();
processing_state.stats.is_chunk_missing = !apply_state.is_new_chunk;
nit: rename is_chunk_missing to is_new_chunk. It's better for consistency, and it's generally good practice to use positive expressions in variable names.
This is the first step towards per-chunk metrics (#12758).

This PR adds a new struct - ChunkApplyStats - which keeps information about things that happened during chunk application. For example how many transactions there were, how many receipts, what the outgoing limits were, how many receipts were forwarded, buffered, etc.

For now ChunkApplyStats contains mainly data relevant to the bandwidth scheduler; in the future more stats can be added to measure other things that we're interested in. I didn't want to add too much stuff at once to keep the PR size reasonable.

There was already a struct called ApplyStats, but it was used only for the balance checker. I replaced it with BalanceStats inside ChunkApplyStats.

ChunkApplyStats is returned in ApplyChunkResult and saved to the database for later use. A new database column is added to keep the chunk application stats. The column is included in the standard garbage collection logic to keep the size of saved data reasonable.

Running neard view-state chunk-apply-stats allows a node operator to view chunk application stats for a given chunk. Example output for a mainnet chunk: (collapsed)

The stats are also available in ChainStore, making it easy to read them from tests. In the future we could also add an RPC endpoint to make the stats available in debug-ui.

The PR is divided into commits for easier review.